docs: clarify OpenAI Python parse vs response_format guidance #2884
jannikmaierhoefer merged 1 commit into main
Conversation
`openai-python>=1.92.0` graduated `parse`/`stream` out of beta. The Langfuse SDK already instruments both `client.chat.completions.parse` (stable) and `client.beta.chat.completions.parse` (legacy), so the previous "use `response_format` with `chat.completions.create` instead of the Beta API" guidance was outdated. Update the OpenAI Python integration page and the structured output cookbook to recommend the stable `parse` helper and scope the beta caveat to older SDKs. Reported by David Traina (Ramp) via support.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Langfuse-Session: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/sessions/b55454d8-1e5e-477d-a6ba-f81f65fe959c
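The recommended stable path might look like the following sketch (the model name, prompt, trace name, and session id are hypothetical; assumes `openai>=1.92.0`, `pydantic`, and the Langfuse drop-in client are installed and keys configured — note that `langfuse_session_id` travels inside `metadata`, not as a direct kwarg):

```python
import os

def build_parse_kwargs(session_id: str) -> dict:
    # Langfuse-specific attributes (name, metadata) ride alongside the
    # regular OpenAI arguments; langfuse_session_id is a metadata key.
    return {
        "model": "gpt-4o-mini",  # hypothetical model choice
        "messages": [{"role": "user", "content": "Extract: Alice is 30."}],
        "name": "structured-output-demo",  # Langfuse trace name
        "metadata": {"langfuse_session_id": session_id},
    }

if os.environ.get("RUN_OPENAI_DEMO"):  # set RUN_OPENAI_DEMO=1 for a live call
    from pydantic import BaseModel
    from langfuse.openai import openai  # Langfuse drop-in replacement

    class Person(BaseModel):
        name: str
        age: int

    client = openai.OpenAI()
    # Stable since openai-python v1.92.0; no beta namespace needed.
    completion = client.chat.completions.parse(
        response_format=Person,
        **build_parse_kwargs(session_id="demo-session"),
    )
    print(completion.choices[0].message.parsed)
```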
@claude review
Docs-only clarification of the OpenAI Python parse vs response_format guidance — the inline nits are minor wording suggestions and the underlying changes are correct.
Overview
This PR updates the OpenAI Python integration docs (content/integrations/model-providers/openai-py.mdx) and the matching structured-output cookbook (.md and .ipynb) to recommend client.chat.completions.parse(...) for openai>=1.92.0, scope the beta caveat to older SDK versions, and keep the response_format + type_to_response_format_param example as a fallback. It also adds a new parse example with name/metadata Langfuse attributes.
Security risks
None. This change touches only Markdown/MDX/notebook content with no runtime, auth, or user-input handling implications.
Level of scrutiny
Low — documentation-only, no code paths or configuration are affected. The factual claim being introduced (parse graduated out of beta in openai-python v1.92.0 and the Langfuse SDK instruments both the stable and beta parse paths) is consistent with the linked release notes and the rest of the integration docs.
Other factors
The two inline nits posted are wording-level: (1) the new bullet groups langfuse_session_id alongside direct kwargs even though it is a metadata key, and (2) the #### Structured Output subsection now lives under an ### OpenAI Beta APIs parent whose intro still says beta APIs require manual @observe() wrapping. Neither is incorrect documentation per se — the canonical 'Custom trace properties' table and a correct metadata={...} example are right above and below the new prose — and a Vercel preview is already building for visual verification. These are the kind of small editorial tweaks a maintainer can take or leave; they don't gate approval.
Summary
- Recommend `client.chat.completions.parse(...)` for `openai>=1.92.0` and scope the beta caveat to older SDK versions.
- The Langfuse SDK instruments both the stable path (`openai.resources.chat.completions.Completions.parse`) and the legacy beta path, so Langfuse attributes (`name`, `metadata`, `langfuse_session_id`, …) work on either.
- Keep the `response_format` + `type_to_response_format_param` example as a fallback for users who cannot upgrade `openai`.

Why
Reported by David Traina (Ramp) in Pylon #1339. OpenAI moved `parse`/`stream` out of beta in openai-python v1.92.0 ~10 months ago, but our docs still warned against the beta API and pushed users to `response_format` + `chat.completions.create`. The SDK has already supported the stable path for a while — only the docs were stale.

Test plan
- Run `pnpm dev` and verify the Structured Output section on `/integrations/model-providers/openai-py` renders correctly.
- Verify the `/guides/cookbook/integration_openai_structured_output` page renders the updated note and `parse` example.

🤖 Generated with Claude Code
Disclaimer: Experimental PR review
Greptile Summary
This PR corrects stale documentation that incorrectly told users to avoid `client.chat.completions.parse` in favour of `response_format` + `create`. It updates both the integration page and the cookbook to recommend the stable `parse` API (available since `openai-python` v1.92.0) and preserves a `type_to_response_format_param` fallback for users who cannot upgrade.

Confidence Score: 4/5
Safe to merge — documentation-only changes with accurate technical content and only minor style observations.
All three files are docs/notebook updates with no runtime code. The guidance is factually correct. The only findings are P2: a private-API import risk in the legacy fallback (pre-existing pattern, not introduced here) and mildly ambiguous phrasing in one note.
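The legacy fallback flagged here might be sketched as follows (hypothetical names and prompt; assumes `pydantic` and an older `openai` are installed). The helper approximates the `response_format` payload shape that `type_to_response_format_param` derives from a Pydantic model, since the real function lives in a private module:

```python
import os

def legacy_response_format(name: str, schema: dict) -> dict:
    # Hand-assembled response_format payload for chat.completions.create,
    # approximating what type_to_response_format_param produces.
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "schema": schema, "strict": True},
    }

if os.environ.get("RUN_OPENAI_DEMO"):  # set RUN_OPENAI_DEMO=1 for a live call
    from pydantic import BaseModel
    # Private module: pinning the openai version is advisable when using this;
    # the stable parse helper (openai>=1.92.0) avoids the import entirely.
    from openai.lib._parsing._completions import type_to_response_format_param
    from langfuse.openai import openai  # Langfuse drop-in replacement

    class Person(BaseModel):
        name: str
        age: int

    client = openai.OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": "Extract: Bob is 42."}],
        response_format=type_to_response_format_param(Person),
    )
    print(resp.choices[0].message.content)
```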
No files require special attention; the private import in `openai-py.mdx` is worth a comment but is not blocking.

Important Files Changed
- `content/integrations/model-providers/openai-py.mdx`: recommends the stable `parse` API (`openai>=1.92.0`) and retains `type_to_response_format_param` as a legacy fallback; the fallback imports from a private internal module (`openai.lib._parsing._completions`).
- Structured-output cookbook (`.md`): clarifies that both `parse` paths are instrumented; the `Alternative` section switched from `client.beta.chat.completions.parse` to the stable `client.chat.completions.parse` with a Langfuse `name` attribute; phrasing in the note is slightly ambiguous.
- Structured-output cookbook (`.ipynb`): updated to the stable `parse` path, `name` attribute added, old output cells preserved.

Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[User wants Structured Output\nwith Langfuse tracing] --> B{openai SDK version?}
    B -- ">=1.92.0\n(recommended)" --> C["client.chat.completions.parse(...)\nresponse_format=PydanticModel\nname='...' metadata={...}"]
    B -- "<1.92.0\n(legacy)" --> D{Pydantic model needed?}
    D -- "Yes" --> E["client.beta.chat.completions.parse(...)\nresponse_format=PydanticModel\n(re-routed to stable on >=1.92.0)"]
    D -- "No / can't upgrade" --> F["type_to_response_format_param(Model)\n→ client.chat.completions.create(...)\nresponse_format=schema_dict"]
    C --> G[Langfuse traces both name\nand metadata attributes ✓]
    E --> G
    F --> G
```